Diabetic retinopathy (DR) and diabetic macular edema (DME) are leading causes of permanent blindness worldwide. Designing an automatic grading system with good generalization ability is vital for clinical practice. However, prior works either grade DR or DME independently, without considering the internal correlation between them, or grade them jointly through shared feature representations while ignoring the potential generalization issues caused by difficult samples and data bias. To address these problems, we propose a framework with a dynamic difficulty-aware weighted loss (DAW) and a dual-stream disentangled learning architecture (DETACH). Inspired by curriculum learning, DAW learns from simple samples to difficult samples by adaptively measuring sample difficulty. DETACH separates the features of the grading tasks to avoid potential emphasis on bias. With the integration of DAW and DETACH, the model learns robust disentangled feature representations to explore the internal correlation between DR and DME and achieve better grading performance. Experiments on three benchmarks show the effectiveness and robustness of our framework under both intra-dataset and cross-dataset tests.
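The weighting idea can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch rendering of a difficulty-aware weighted loss, assuming per-sample cross-entropy as the difficulty measure and a linear curriculum schedule; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def daw_loss(logits: torch.Tensor, targets: torch.Tensor,
             epoch: int, max_epochs: int) -> torch.Tensor:
    """Down-weight difficult samples early on; admit them as training progresses."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    # Difficulty proxy: normalized per-sample loss, detached from the graph.
    difficulty = per_sample.detach() / (per_sample.detach().max() + 1e-8)
    progress = min(epoch / max_epochs, 1.0)  # linear curriculum schedule
    # Early (progress ~ 0): hard samples get exponentially small weights;
    # late (progress ~ 1): all weights approach 1.
    weights = torch.exp(-(1.0 - progress) * difficulty)
    return (weights * per_sample).mean()
```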
Contrastive language-image pretraining (CLIP) links the vision and language modalities into a unified embedding space, yielding tremendous potential for vision-language (VL) tasks. While early concurrent works have begun to study this potential on a subset of tasks, important questions remain: 1) What is the benefit of CLIP on unstudied VL tasks? 2) Does CLIP provide benefit in low-shot or domain-shifted scenarios? 3) Can CLIP improve existing approaches without impacting inference or pretraining complexity? In this work, we seek to answer these questions through two key contributions. First, we introduce an evaluation protocol that includes Visual Commonsense Reasoning (VCR), Visual Entailment (SNLI-VE), and Visual Question Answering (VQA), across a variety of data availability constraints and conditions of domain shift. Second, we propose an approach, named CLIP Targeted Distillation (CLIP-TD), to intelligently distill knowledge from CLIP into existing architectures using a dynamically weighted objective applied to adaptively selected tokens per instance. Experiments demonstrate that our proposed CLIP-TD leads to exceptional gains in the low-shot (up to 51.9%) and domain-shifted (up to 71.3%) conditions of VCR, while simultaneously improving performance under standard fully-supervised conditions (up to 2%), achieving state-of-the-art performance on VCR compared to other single models pretrained with image-text data only. On SNLI-VE, CLIP-TD produces significant gains in low-shot conditions (up to 6.6%) as well as in fully-supervised conditions (up to 3%). On VQA, CLIP-TD provides improvement in low-shot (up to 9%) and fully-supervised (up to 1.3%) conditions. Finally, CLIP-TD outperforms concurrent works utilizing CLIP for finetuning, as well as baseline naive distillation approaches. Code will be made available.
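To make the distillation objective concrete, here is a hedged PyTorch sketch of token-selective distillation in the spirit of CLIP-TD; the selection rule (teacher-student disagreement) and the per-instance confidence weight are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def clip_td_loss(student_tokens: torch.Tensor,   # (B, T, D) student features
                 teacher_tokens: torch.Tensor,   # (B, T, D) CLIP teacher features
                 student_conf: torch.Tensor,     # (B,) student confidence in [0, 1]
                 select_ratio: float = 0.25) -> torch.Tensor:
    # Adaptive token selection: distill where teacher and student disagree most.
    disagreement = 1.0 - F.cosine_similarity(student_tokens, teacher_tokens, dim=-1)
    k = max(1, int(select_ratio * disagreement.size(1)))
    _, top_idx = disagreement.topk(k, dim=1)                        # (B, k)
    idx = top_idx.unsqueeze(-1).expand(-1, -1, student_tokens.size(-1))
    s_sel = student_tokens.gather(1, idx)
    t_sel = teacher_tokens.gather(1, idx)
    per_instance = (1.0 - F.cosine_similarity(s_sel, t_sel, dim=-1)).mean(dim=1)
    # Dynamic per-instance weight: distill harder where the student is unsure.
    return ((1.0 - student_conf).detach() * per_instance).mean()
```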
Machine learning methods have recently shown promise in solving partial differential equations (PDEs). They can be classified into two broad categories: approximating the solution function and learning the solution operator. Physics-informed neural networks (PINNs) are an example of the former, while Fourier neural operators (FNOs) are an example of the latter. Both approaches have shortcomings. The optimization of PINNs is challenging and prone to failure, especially on multi-scale dynamical systems. FNOs do not suffer from this optimization issue since they perform supervised learning on a given dataset, but obtaining such data may be too expensive or infeasible. In this work, we propose the physics-informed neural operator (PINO), where we combine the operator-learning and function-optimization frameworks. This integrated approach improves the convergence rate and accuracy of both PINN and FNO models. In the operator-learning phase, PINO learns the solution operator over multiple instances of a parametric PDE family. In the test-time optimization phase, PINO optimizes the pre-trained operator ansatz for the query instance of the PDE. Experiments show that PINO outperforms previous ML methods on many popular PDE families while retaining the extraordinary speedup of FNO compared to solvers. In particular, PINO accurately solves challenging long-term transient flows and the Kolmogorov flow, where other baseline ML methods fail to converge.
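The combination of the two frameworks can be summarized as a single objective. Below is a minimal sketch, assuming a generic mean-squared data term plus a PDE-residual term; the FNO backbone and the specific residual are abstracted behind the caller-supplied model and pde_residual, which are illustrative names.

```python
import torch

def pino_loss(model, a: torch.Tensor, u_data, pde_residual,
              lambda_data: float = 1.0, lambda_pde: float = 1.0) -> torch.Tensor:
    """a: PDE inputs/coefficients (B, ...); u_data: reference solutions or None;
    pde_residual(u, a): caller-supplied residual of the governing equation."""
    u_pred = model(a)
    loss = torch.zeros((), device=a.device)
    if u_data is not None:  # operator-learning phase: supervised on solved instances
        loss = loss + lambda_data * torch.mean((u_pred - u_data) ** 2)
    # Physics-informed term; used alone, this also covers test-time
    # optimization on a single query instance of the PDE.
    loss = loss + lambda_pde * torch.mean(pde_residual(u_pred, a) ** 2)
    return loss
```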
In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples, using the Deep Ritz Method (DRM) and physics-informed neural networks (PINNs). To simplify the problem, we focus on a prototype elliptic PDE: the Schrödinger equation with a zero Dirichlet boundary condition, which has wide applications in quantum-mechanical systems. We establish upper and lower bounds for both methods, improving upon concurrently developed upper bounds for this problem via a fast-rate generalization bound. We discover that the current Deep Ritz Method is sub-optimal and propose a modified version of it. We also prove that PINN and the modified version of DRM can achieve minimax optimal bounds over Sobolev spaces. Empirically, following recent work showing that deep model accuracy improves with growing training sets according to a power law, we supply computational experiments to show a similar behavior of dimension-dependent power laws for deep PDE solvers.
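For reference, here is a sketch of the prototype problem and the standard Deep Ritz variational objective, assuming the static Schrödinger form with zero Dirichlet boundary data; the notation is ours, not necessarily the paper's.

```latex
\begin{aligned}
  -\Delta u(x) + V(x)\,u(x) &= f(x), && x \in \Omega, \\
  u(x) &= 0, && x \in \partial\Omega, \\
  \mathcal{E}_{\mathrm{DRM}}(u) &= \int_{\Omega} \Big( \tfrac{1}{2}\,\lvert \nabla u \rvert^{2}
      + \tfrac{1}{2}\, V u^{2} - f u \Big)\, \mathrm{d}x .
\end{aligned}
```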
From a visual scene containing multiple people, humans are able to distinguish each individual given context descriptions about what happened before, their mental/physical states, intentions, etc. This ability relies heavily on human-centric commonsense knowledge and reasoning. For example, if asked to identify the "person who needs healing" in an image, we need to first know that such people usually have injuries or suffering expressions, then find the corresponding visual clues before finally grounding the person. We present a new commonsense task, Human-centric Commonsense Grounding, that tests models' ability to ground individuals given context descriptions about what happened before, and their mental/physical states or intentions. We further create a benchmark, HumanCog, a dataset with 130k grounded commonsensical descriptions annotated on 67k images, covering diverse types of commonsense and visual scenes. We set up a context-object-aware method as a strong baseline that outperforms previous pre-trained and non-pretrained models. Further analysis demonstrates that rich visual commonsense and powerful integration of multi-modal commonsense are essential, which sheds light on future work. Data and code will be available at https://github.com/Hxyou/HumanCog.
Deep neural networks still struggle on long-tailed image datasets, and one of the reasons is that the imbalance of training data across categories leads to the imbalance of trained model parameters. Motivated by the empirical findings that trained classifiers yield larger weight norms in head classes, we propose to reformulate the recognition probabilities through included angles without re-balancing the classifier weights. Specifically, we calculate the angles between the data feature and the class-wise classifier weights to obtain angle-based prediction results. Inspired by the performance improvement of the predictive form reformulation and the outstanding performance of the widely used two-stage learning framework, we explore the different properties of this angular prediction and propose novel modules to improve the performance of different components in the framework. Our method is able to obtain the best performance among peer methods without pretraining on CIFAR10/100-LT and ImageNet-LT. Source code will be made publicly available.
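The angular reformulation is compact enough to sketch. The snippet below shows cosine-based logits that remove the effect of class-wise weight norms; the scale constant follows common practice for cosine classifiers and is our assumption, not necessarily the paper's choice.

```python
import torch
import torch.nn.functional as F

def angle_logits(features: torch.Tensor,       # (B, D) data features
                 class_weights: torch.Tensor,  # (C, D) classifier weights
                 scale: float = 16.0) -> torch.Tensor:
    f = F.normalize(features, dim=1)       # remove feature-norm effects
    w = F.normalize(class_weights, dim=1)  # remove class-wise weight-norm imbalance
    # Logits now depend only on the included angle between feature and class weight,
    # so head classes no longer dominate via larger weight norms.
    return scale * f @ w.t()               # (B, C)
```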
Visual commonsense understanding requires Vision-Language (VL) models to not only understand image and text but also cross-reference between them to fully integrate and achieve comprehension of the visual scene described. Recently, various approaches have been developed and have achieved high performance on visual commonsense benchmarks. However, it is unclear whether the models really understand the visual scene and underlying commonsense knowledge, due to limited evaluation data resources. To provide an in-depth analysis, we present a Multimodal Evaluation (ME) pipeline to automatically generate question-answer pairs that test models' understanding of the visual scene, text, and related knowledge. We then take a step further to show that training with the ME data boosts the model's performance in standard VCR evaluation. Lastly, our in-depth analysis and comparison reveal interesting findings: (1) semantically low-level information can assist the learning of high-level information but not the opposite; (2) visual information is generally under-utilized compared with text.
The robustness of signal temporal logic not only assesses whether a signal adheres to a specification but also provides a measure of how much the formula is satisfied or violated. The computation of robustness is based on evaluating the robustness of underlying predicates. However, the robustness of predicates is usually defined in a model-free way, i.e., without including the system dynamics. Moreover, it is often nontrivial to define the robustness of complex predicates precisely. To address these issues, we propose a notion of model predictive robustness, which provides a more systematic way of evaluating robustness compared to previous approaches by considering model-based predictions. In particular, we use Gaussian process regression to learn the robustness based on precomputed predictions so that robustness values can be computed efficiently online. We evaluate our approach for the use case of autonomous driving, with predicates used in formalized traffic rules, on a recorded dataset, which highlights the advantage of our approach over traditional ones in terms of expressiveness. By incorporating our robustness definition into a trajectory planner, autonomous vehicles obey traffic rules more robustly than the human drivers in the dataset.
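As a rough illustration, robustness values could be learned offline and queried online with an off-the-shelf GP as below; the two-dimensional predicate features and synthetic targets are placeholders, not the paper's actual setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Offline: X holds predicate features (e.g., relative distance and velocity),
# y holds precomputed model-based robustness values for those states.
rng = np.random.default_rng(0)
X = rng.random((200, 2))                 # placeholder training states
y = np.sin(X[:, 0]) - 0.5 * X[:, 1]      # placeholder robustness targets

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

# Online: cheap evaluation of robustness (with uncertainty) for a new state.
rho_mean, rho_std = gp.predict(np.array([[0.3, 0.7]]), return_std=True)
```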
Large-scale multi-modal contrastive pretraining has demonstrated great transferability on a range of downstream tasks by mapping multiple modalities into a shared embedding space. Typically, separate encoders have been employed for each modality. However, recent work suggests that transformers can support learning across multiple modalities and allow knowledge sharing. Inspired by this, we investigate a variety of Modality-Shared Contrastive Language-Image Pre-training (MS-CLIP) frameworks. More specifically, we question how many parameters of a transformer model can be shared across modalities during contrastive pre-training, and rigorously examine architectural design choices that position the proportion of shared parameters along a spectrum. Under the studied conditions, we observe that a mostly unified encoder for vision and language signals outperforms all other variants that separate more parameters. Additionally, we find that modality-specific parallel modules further improve performance. Experimental results show that the proposed MS-CLIP approach outperforms vanilla CLIP by up to 13% relative in zero-shot classification (pre-trained on YFCC-100M), while simultaneously requiring fewer parameters. In addition, our approach outperforms vanilla CLIP in linear probing on a collection of 24 downstream vision tasks. Furthermore, we discover that sharing parameters leads to semantic concepts from different modalities being encoded closer together in the embedding space, facilitating the transfer of common semantic structures (e.g., attention patterns) from language to vision. Code is available at https://github.com/hxyou/msclip.
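A rough sketch of the modality-shared design: one transformer trunk processing both modalities, with lightweight modality-specific parallel modules alongside it. Module shapes and the residual wiring are our illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedVLEncoder(nn.Module):
    """One trunk shared by both modalities, plus per-modality parallel adapters."""
    def __init__(self, dim: int = 512, depth: int = 6, heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=depth)  # shared weights
        self.image_adapter = nn.Linear(dim, dim)  # modality-specific parallel module
        self.text_adapter = nn.Linear(dim, dim)

    def forward(self, tokens: torch.Tensor, modality: str) -> torch.Tensor:
        shared = self.trunk(tokens)  # same parameters for image and text tokens
        adapter = self.image_adapter if modality == "image" else self.text_adapter
        return shared + adapter(tokens)  # parallel branch added to trunk output
```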
Video scene graph generation (VidSGG) aims to parse video content into scene graphs, which involves modeling the spatio-temporal contextual information in the video. However, due to the long-tailed training data in datasets, the generalization performance of existing VidSGG models can be affected by the spatio-temporal conditional bias problem. In this work, from the perspective of meta-learning, we propose a novel Meta Video Scene Graph Generation (MVSGG) framework to address this bias problem. Specifically, to handle various types of spatio-temporal conditional biases, our framework first constructs a support set and a group of query sets, where the data distribution of each query set differs from that of the support set w.r.t. a type of conditional bias. Then, by performing a novel meta training and testing process that optimizes the model to obtain good testing performance on these query sets after training on the support set, our framework can effectively guide the model to learn to generalize well against biases. Extensive experiments demonstrate the efficacy of our proposed framework.
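The support/query optimization can be pictured as a generic MAML-style step, sketched below: adapt on the support set, then require the adapted parameters to also perform well on query sets whose conditional distribution differs w.r.t. a bias type. The loss contract and batches are placeholders; the paper's exact procedure may differ.

```python
import torch
from torch.func import functional_call

def meta_step(model, loss_fn, support_batch, query_batches, inner_lr: float = 1e-3):
    """loss_fn(outputs, targets) -> scalar; batches are (inputs, targets) pairs."""
    params = dict(model.named_parameters())

    def loss_on(p, batch):
        inputs, targets = batch
        return loss_fn(functional_call(model, p, (inputs,)), targets)

    # Inner update: one virtual gradient step on the support set.
    support_loss = loss_on(params, support_batch)
    grads = torch.autograd.grad(support_loss, list(params.values()), create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
    # Meta objective: the adapted model must generalize to differently-biased query sets.
    meta_loss = sum(loss_on(adapted, qb) for qb in query_batches) / len(query_batches)
    return support_loss + meta_loss
```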